When a large language model (LLM) performs complex reasoning by chain of thought (CoT), it can be highly sensitive to individual mistakes. Prior work has had to train verifiers to address this issue. Humans, after inferring a conclusion, often check it by re-verifying it, which can avoid some mistakes. We propose a new method called self-verification that uses the conclusion of the CoT as a condition to build a new sample and asks the LLM to re-predict the original conditions, which have been masked. We calculate an explainable verification score based on the accuracy of these re-predictions. This method improves accuracy on multiple arithmetic and logical reasoning datasets when using few-shot learning. We have demonstrated that LLMs can conduct explainable self-verification of their own conclusions and achieve competitive reasoning performance. Extensive experiments have demonstrated that our method can help multiple large language models with self-verification avoid interference from incorrect CoT. Code is available at \url{https://github.com/WENGSYX/Self-Verification}
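A minimal sketch of the verification step described above, assuming a generic `generate(prompt)` callable standing in for the LLM; the prompt template, the masking token X, and the exact-match scoring rule are illustrative assumptions rather than the paper's exact design:

```python
def self_verification_score(question, masked_conditions, conclusion, generate, k=3):
    """Mask each original condition, condition on the CoT conclusion,
    and score how often the LLM re-predicts the masked value.

    `generate` is any LLM completion callable; the prompt wording and
    exact-match scoring below are illustrative assumptions."""
    hits, total = 0, 0
    for masked_text, true_value in masked_conditions:
        prompt = (
            f"{question}\n"
            f"Suppose the answer is {conclusion}.\n"
            f"{masked_text}\n"            # condition with its value replaced by X
            f"What is the value of X?"
        )
        for _ in range(k):                # sample k re-predictions per condition
            if generate(prompt).strip() == str(true_value):
                hits += 1
            total += 1
    return hits / total if total else 0.0  # verification score in [0, 1]
```

Candidate conclusions from several sampled CoT paths can then be ranked by this score, keeping the conclusion that best re-predicts its own masked conditions.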
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
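As a concrete illustration of the most common workaround above, a hedged sketch of patch-based training, where random patches are cropped from a sample that is too large to process at once; array shapes and patch sizes are arbitrary example values:

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Crop random patches from a sample too large to fit in memory at once.

    `volume` is, e.g., a 3D medical image as a numpy array; the patch size
    and patch count are arbitrary example values."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        # Pick a random corner so the patch lies fully inside the volume.
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)  # (n_patches, *patch_size) training batch
```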
Signed directed networks are ubiquitous in the real world. However, few works have proposed spectral graph neural network (GNN) methods for analyzing such networks. Here, we introduce a signed directed Laplacian matrix, which we call the magnetic signed Laplacian, as a natural generalization of both the signed Laplacian on signed graphs and the magnetic Laplacian on directed graphs. We then use this matrix to construct a novel spectral GNN architecture and conduct extensive experiments on node clustering and link prediction tasks. In these experiments, we consider tasks related to signed information, tasks related to directional information, and tasks related to both signed and directional information. We demonstrate that our proposed spectral GNN effectively incorporates both signed and directional information and attains leading performance on a wide range of datasets. In addition, we provide a novel synthetic network model, which we call the signed directed stochastic block model, as well as a number of novel real-world datasets based on lead-lag relationships in financial time series.
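A hedged numpy sketch of one plausible construction of such a matrix: the symmetrized signed weights set the magnitude, the antisymmetric part sets a direction-encoding phase, and the result is Hermitian by construction. The paper's exact normalization and phase convention may differ:

```python
import numpy as np

def magnetic_signed_laplacian(A, q=0.25):
    """Sketch of a magnetic signed Laplacian for a signed directed graph.

    A : (n, n) real matrix of signed, directed edge weights.
    q : charge parameter controlling how direction enters the phase.

    The choices below (symmetrized signed magnitude, antisymmetric phase,
    degrees from absolute weights) are assumptions, not the paper's
    verbatim definition."""
    A_sym = (A + A.T) / 2.0                  # symmetrized signed weights
    theta = 2.0 * np.pi * q * (A - A.T)      # direction-encoding phase
    H = A_sym * np.exp(1j * theta)           # Hermitian "magnetic" adjacency
    D = np.diag(np.abs(A_sym).sum(axis=1))   # degree matrix from |weights|
    return D - H                             # Hermitian by construction
```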
Networks are ubiquitous in many real-world applications (e.g., social networks encoding trust/distrust relationships, correlation networks arising from time series data). While many networks are signed or directed, or both, there is a lack of unified software packages for graph neural networks (GNNs) specifically designed for signed and directed networks. In this paper, we present PyTorch Geometric Signed Directed, a software package that fills this gap. Along the way, we also provide a brief review survey of the analysis of signed and directed networks, discuss the data used in related experiments, provide an overview of the implemented methods, and evaluate them with experiments. The deep learning framework consists of easy-to-use GNN models, synthetic and real-world data, as well as task-specific evaluation metrics and loss functions for signed and directed networks. As an extension library for PyTorch Geometric, our proposed software is maintained with open-source releases, detailed documentation, continuous integration, unit tests, and code coverage checks. Our code is publicly available at \url{https://github.com/sherylhyx/pytorch_geometric_signed_directed}.
Recovering a global ranking from pairwise comparisons has wide applications, from time synchronization to ranking sports teams. Pairwise comparisons corresponding to matches in a competition can be interpreted as edges in a directed graph (digraph) whose nodes represent, e.g., competitors with unknown rank. In this paper, we introduce neural networks into the ranking recovery problem by proposing GNNRank, a trainable GNN-based framework with digraph embedding. Moreover, new objectives are devised to encode ranking upsets/violations. The framework involves a ranking score estimation approach, and adds an inductive bias by unrolling the Fiedler vector computation of the graph constructed from a learnable similarity matrix. Experimental results on extensive datasets show that our method is competitive and often superior to baselines, and exhibits promising transfer ability. Code and preprocessed data are at: \url{https://github.com/sherylhyx/gnnrank}.
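A hedged sketch of the non-learned core idea, deriving ranking scores from the Fiedler vector of a graph Laplacian; in GNNRank the similarity matrix is learnable and the eigenvector computation is unrolled into differentiable steps, whereas this sketch uses a fixed similarity matrix and a direct eigendecomposition:

```python
import numpy as np

def fiedler_ranking_scores(S):
    """Derive node scores from a symmetric similarity matrix S via the
    Fiedler vector (eigenvector of the second-smallest Laplacian eigenvalue).
    Note the usual sign/order ambiguity of eigenvectors."""
    L = np.diag(S.sum(axis=1)) - S   # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1]                # Fiedler vector as ranking scores

# Toy example: pairwise win counts among 4 competitors, crudely
# symmetrized into a similarity proxy standing in for the learnable matrix.
wins = np.array([[0, 3, 2, 4],
                 [1, 0, 2, 3],
                 [0, 1, 0, 2],
                 [0, 1, 1, 0]], dtype=float)
S = wins + wins.T
scores = fiedler_ranking_scores(S)
print(np.argsort(scores))            # one candidate global ordering
```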
By integrating human knowledge and experience, human-in-the-loop aims to train accurate prediction models at minimal cost. Humans can provide training data for machine learning applications and directly accomplish tasks in the pipeline that are hard for computers in machine-only approaches. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance by intervening in model training, and (3) the design of systems that are independent of the loop. With this categorization, we summarize the major approaches in the field, together with their technical strengths and weaknesses, and give a simple classification and discussion for natural language processing, computer vision, and others. Furthermore, we discuss some open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
Node clustering is a powerful tool for network analysis. We introduce a graph neural network framework, DIGRAC, that obtains node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss that can be used for network clustering. Here, we propose directed flow imbalance measures, which are tightly related to directionality, and can reveal clusters in the network even when there is no density difference between clusters. In contrast to standard approaches in the literature, directionality is not treated as a nuisance here, but rather contains the main signal. Unlike existing graph neural network methods, DIGRAC optimizes directed flow imbalance for clustering without requiring label supervision, and unlike existing spectral methods, it can naturally incorporate node features. Extensive experimental results on synthetic data, in the form of directed stochastic block models, and on real-world data at different scales, demonstrate that our method, based on flow imbalance, attains state-of-the-art results on directed graph clustering when compared against 10 state-of-the-art methods from the literature, for a wide range of noise and sparsity levels, graph structures, and topologies, even outperforming supervised methods.
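A hedged sketch of one simple pairwise flow imbalance term of this kind; DIGRAC's probabilistic loss aggregates such terms over soft cluster assignments, and the normalization chosen here is an illustrative assumption:

```python
import numpy as np

def pairwise_flow_imbalance(A, labels, k, l):
    """Directed flow imbalance between clusters k and l.

    A is a (possibly weighted) directed adjacency matrix and `labels` are
    hard cluster assignments. Returns a value in [0, 1]: 0 for perfectly
    balanced flow, 1 when all flow goes one way. The normalization is an
    illustrative choice, not DIGRAC's exact loss."""
    in_k, in_l = labels == k, labels == l
    w_kl = A[np.ix_(in_k, in_l)].sum()   # total edge weight k -> l
    w_lk = A[np.ix_(in_l, in_k)].sum()   # total edge weight l -> k
    denom = w_kl + w_lk
    return 0.0 if denom == 0 else abs(w_kl - w_lk) / denom
```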
State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards "small". Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4% (97.6% top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.

Name template          Description
train-IG-I-1.5k        Instagram training set of I images and ∼1.5k hashtags from ImageNet-1k.
train-IG-I-8.5k        Instagram training set of I images and ∼8.5k hashtags from WordNet.
train-IG-I-17k         Instagram training set of I images and ∼17k hashtags from WordNet.
train-IN-1M-1k         The standard ImageNet-1k ILSVRC training set with 1.28M images.
val-IN-50k-1k          The standard ImageNet-1k ILSVRC validation set with 50k images.
train-IN-I-L           Extended ImageNet training set of I images and L ∈ {5k, 9k} labels.
val-IN-I-L             Extended ImageNet validation set of I images and L ∈ {5k, 9k} labels.
train-CUB-6k-200       The Caltech-UCSD Birds-200-2011 training set.
val-CUB-6k-200         The Caltech-UCSD Birds-200-2011 validation set.
train-Places-1.8M-365  The Places365-Standard training set (high-resolution version).
val-Places-37k-365     The Places365-Standard validation set (high-resolution version).
train-COCO-135k-80     The standard COCO detection training set (2017 version).
val-COCO-5k-80         The standard COCO detection validation set (2017 version).
test-COCO-20k-80       The standard COCO detection test-dev set (2017 version).

Table 1: Summary of image classification datasets. Each dataset is named with a template, role-source-I-L, that indicates its role (training, validation, testing), source, number of images I, and number of labels L.
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the goals of optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g., statistical power) but substantially better in earning (e.g., direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: improving participant benefits, increasing efficiency, and reducing experimental costs in adaptive multi-armed experiments with exponential rewards.
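A hedged sketch of the index-policy loop such a design follows; `gittins_index` is a placeholder for the paper's exponential-reward index, which is not reproduced here, and the crude mean-plus-bonus index in the example is purely illustrative:

```python
import numpy as np

def run_index_policy(arms, gittins_index, horizon, rng=None):
    """Adaptive allocation: each round, pull the arm with the highest index.

    `arms` is a list of reward-sampling callables (exponential here);
    `gittins_index(n, total)` is a placeholder mapping an arm's pull count
    and cumulative reward to its index -- the paper's actual exponential-
    reward index computation is not reproduced in this sketch."""
    rng = rng or np.random.default_rng()
    n = np.zeros(len(arms))        # pulls per arm
    total = np.zeros(len(arms))    # cumulative reward per arm
    for _ in range(horizon):
        idx = [gittins_index(n[a], total[a]) for a in range(len(arms))]
        a = int(np.argmax(idx))
        r = arms[a](rng)           # observe an exponentially distributed reward
        n[a] += 1
        total[a] += r
    return n, total

# Example: two exponential arms (rates are illustrative) and a crude
# mean-plus-exploration-bonus index standing in for the true GI.
arms = [lambda rng: rng.exponential(1.0), lambda rng: rng.exponential(2.0)]
index = lambda n, tot: (tot / n if n > 0 else np.inf) + 1.0 / np.sqrt(n + 1)
pulls, rewards = run_index_policy(arms, index, horizon=500)
```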
The Transformer has achieved impressive success on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement provided by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
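A hedged PyTorch-style sketch of the two-branch objective described above: the online branch predicts the target branch's representation of the same tokens under a different perturbation, and the target weights follow an exponential moving average of the online weights. Architecture details and the difficulty-ranking head are omitted, and all module names are placeholders:

```python
import torch
import torch.nn.functional as F

def ema_update(target, online, tau=0.99):
    """Target branch follows an exponential moving average of the online branch."""
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(tau).add_((1 - tau) * p_o)

def bolt_style_step(online, predictor, target, tokens_a, tokens_b, optimizer):
    """One self-supervised step on two perturbations of the same patch tokens.

    `online`, `predictor`, and `target` are placeholder modules (the target
    is typically initialized as a deep copy of the online encoder); the
    paper's auxiliary difficulty-ranking task is omitted in this sketch."""
    with torch.no_grad():
        z_target = target(tokens_b)               # stop-gradient target view
    p_online = predictor(online(tokens_a))        # online prediction of the target
    # BYOL-style loss: 2 - 2 * cosine similarity between the two representations.
    loss = 2 - 2 * F.cosine_similarity(p_online, z_target, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(target, online)                    # slowly track the online weights
    return loss.item()
```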